Conference Proceedings

Fair Enough: Standardizing Evaluation and Model Selection for Fairness Research in NLP

X Han, T Baldwin, T Cohn

EACL 2023: Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics | Association for Computational Linguistics (ACL) | Published: 2023

Abstract

Modern NLP systems exhibit a range of biases, which a growing literature on model debiasing attempts to correct. However, current progress is hampered by a plurality of definitions of bias, means of quantification, and the oftentimes vague relation between debiasing algorithms and theoretical measures of bias. This paper seeks to clarify the current situation and plot a course for meaningful progress in fair learning, with two key contributions: (1) making clear the inter-relations among the current gamut of methods, and their relation to fairness theory; and (2) addressing the practical problem of model selection, which involves a trade-off between fairness and accuracy and has led to systemic issue…


University of Melbourne Researchers

Grants

Awarded by Australian Research Council


Funding Acknowledgements

We thank Lea Frermann, Aili Shen, and Shivashankar Subramanian for their discussions and inputs. We thank the anonymous reviewers for their helpful feedback and suggestions. This work was funded by the Australian Research Council, Discovery grant DP200102519.